A quality assuring multi-armed bandit crowdsourcing mechanism with incentive compatible learning

Authors

  • Shweta Jain
  • Sujit Gujar
  • Onno Zoeter
  • Y. Narahari
Abstract

We develop a novel multi-armed bandit (MAB) mechanism for the problem of selecting a subset of crowd workers to achieve an assured accuracy for each binary labelling task in a cost-optimal way. This problem is challenging because workers have unknown qualities and strategic costs.
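
The abstract only describes the general setting, so the sketch below is a minimal, purely illustrative take on such a selection loop: UCB-style optimistic estimates of each worker's unknown quality combined with a greedy, cost-ordered search for a subset whose estimated majority-vote accuracy meets a target. The names (Worker, majority_accuracy, select_subset) and the greedy rule are assumptions for illustration, not the mechanism proposed in the paper, which additionally has to handle strategic cost reporting through its payment rule.

import math

# Illustrative sketch only (not the paper's mechanism): maintain optimistic
# (UCB-style) estimates of each worker's unknown labelling quality and pick a
# cheap subset whose estimated majority-vote accuracy reaches a target level.
# All names here (Worker, majority_accuracy, select_subset) are hypothetical.

class Worker:
    def __init__(self, cost):
        self.cost = cost       # reported cost (strategic in the real problem)
        self.successes = 0     # labels that agreed with the aggregated outcome
        self.attempts = 0      # tasks this worker has been assigned so far

    def quality_ucb(self, t):
        # Empirical quality plus an exploration bonus; fully optimistic
        # (estimate 1.0) for a worker that has never been tried.
        if self.attempts == 0:
            return 1.0
        mean = self.successes / self.attempts
        bonus = math.sqrt(2.0 * math.log(t + 1) / self.attempts)
        return min(1.0, mean + bonus)

def majority_accuracy(qualities):
    # Probability that a strict majority of independent workers with the given
    # qualities label a binary task correctly (brute force; fine for small sets).
    n = len(qualities)
    acc = 0.0
    for mask in range(1 << n):
        p, correct = 1.0, 0
        for i, q in enumerate(qualities):
            if (mask >> i) & 1:
                p *= q
                correct += 1
            else:
                p *= 1.0 - q
        if correct > n / 2:
            acc += p
    return acc

def select_subset(workers, t, target):
    # Greedy by reported cost: keep adding the cheapest remaining worker until
    # the optimistic estimate of majority-vote accuracy reaches the target.
    chosen = []
    for w in sorted(workers, key=lambda x: x.cost):
        chosen.append(w)
        if majority_accuracy([c.quality_ucb(t) for c in chosen]) >= target:
            break
    return chosen  # may fall short of the target if it is unreachable

After each task, the requester would update successes and attempts from the aggregated label and recompute the bounds for the next round; eliciting the workers' private costs truthfully, which the paper addresses, is ignored in this sketch.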

Similar resources

An Incentive Compatible Multi-Armed-Bandit Crowdsourcing Mechanism with Quality Assurance

Consider a requester who wishes to crowdsource a series of identical binary labeling tasks from a pool of workers so as to achieve an assured accuracy for each task, in a cost-optimal way. The workers are heterogeneous with unknown but fixed qualities, and moreover their costs are private. The problem is to select an optimal subset of the workers to work on each task so that the outcome obtained...

A Truthful Budget Feasible Multi-Armed Bandit Mechanism for Crowdsourcing Time Critical Tasks

Motivated by allocation and pricing problems faced by service requesters on modern crowdsourcing platforms, we study a multi-armed bandit (MAB) problem with several real-world features: (a) the requester wishes to crowdsource a number of tasks but has a fixed budget, which leads to a trade-off between cost and quality while allocating tasks to workers; (b) each task has a fixed deadline and a wor...

A Multiarmed Bandit Incentive Mechanism for Crowdsourcing Demand Response in Smart Grids

Demand response is a critical part of renewable integration and energy cost reduction goals across the world. Motivated by the need to reduce costs arising from electricity shortage and renewable energy fluctuations, we propose a novel multiarmed bandit mechanism for demand response (MAB-MDR) which makes monetary offers to strategic consumers who have unknown response characteristics, to inceti...

A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism with Logarithmic Regret

Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored search auctions, crowdsourcing, online procurement, etc. Existing stochastic MAB mechanisms with a deterministic payment rule, proposed in the literature, necessarily suffer a regret of Ω(T^{2/3}), where T is the number of time steps. This happens because the existing mechanisms consider the worst-case scenario where the mea...

An Optimal Bidimensional Multi-Armed Bandit Auction for Multi-unit Procurement

We study the problem of a buyer (aka auctioneer) who gains stochastic rewards by procuring multiple units of a service or item from a pool of heterogeneous strategic agents. The reward obtained for a single unit from an allocated agent depends on the inherent quality of the agent; the agent’s quality is fixed but unknown. Each agent can only supply a limited number of units (capacity of the age...

Publication date: 2014